Exploiting Preference Elicitation in Interactive and User-centered Algorithmic Recourse: An Initial Exploration
Seyedehdelaram Esfahani, Giovanni De Toni, Bruno Lepri, Andrea Passerini, Katya Tentori, Massimo Zancanaro
https://arxiv.org/abs/2404.05270
Preliminary Guidelines For Combining Data Integration and Visual Data Analysis
Adam Coscia, Ashley Suh, Remco Chang, Alex Endert
https://arxiv.org/abs/2403.04757
Introducing cosmosGPT: Monolingual Training for Turkish Language Models
H. Toprak Kesgin, M. Kaan Yuce, Eren Dogan, M. Egemen Uzun, Atahan Uz, H. Emre Seyrek, Ahmed Zeer, M. Fatih Amasyali
https://arxiv.org/abs/2404.17336
#bitBlog Introducing HTTPie Desktop https://blog.bitexpert.de/blog/introducing_httpie_desktop
Artificial intelligence for context-aware visual change detection in software test automation
Milad Moradi, Ke Yan, David Colwell, Rhona Asgari
https://arxiv.org/abs/2405.00874
Decoy Effect In Search Interaction: Understanding User Behavior and Measuring System Vulnerability
Nuo Chen, Jiqun Liu, Hanpei Fang, Yuankai Luo, Tetsuya Sakai, Xiao-Ming Wu
https://arxiv.org/abs/2403.18462
60/ This is Daniel. He is 16. You can also see him in picture 56/.
I photographed him for the first time last year, and then again every so often, without realizing it was the same person. Back then I had spoken with him and he had told me his name. On Saturday I heard him give his name and age in an interview, while he was being carried away. That is when I could match up the pictures. I have many pictures of him in pain compliance holds. There are absurd scenes from the Elsenbrücke.
#LetzteGeneration
SwipeGANSpace: Swipe-to-Compare Image Generation via Efficient Latent Space Exploration
Yuto Nakashima, Mingzhe Yang, Yukino Baba
https://arxiv.org/abs/2404.19693 https://arxiv.org/pdf/2404.19693
Abstract: Generating images that match a user's preferences with generative adversarial networks (GANs) is challenging owing to the high-dimensional nature of the latent space. In this study, we propose a novel approach that uses simple swipe interactions to generate preferred images for users. To explore the latent space effectively with swipe interactions alone, we apply principal component analysis to the latent space of StyleGAN, creating meaningful subspaces. We then use a multi-armed bandit algorithm to select which dimensions to explore, focusing on the user's preferences. Experiments show that our method generates preferred images more efficiently than the baseline methods. Furthermore, changes in preferred images during generation, as well as the display of entirely different image styles, were observed to provide new inspiration and subsequently alter user preferences. This highlights the dynamic nature of user preferences, which our proposed approach recognizes and adapts to.
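The loop the abstract describes (PCA directions over the latent space, a bandit choosing which direction to vary, swipe feedback as reward) can be sketched as follows. This is a hypothetical reimplementation, not the paper's code: the StyleGAN generator and swipe UI are replaced by random latent samples and a toy preference model, and the UCB1 bandit and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_samples, n_dirs = 64, 2000, 8

# 1) PCA over sampled latent codes (stand-in for StyleGAN's W space)
#    yields the principal directions used as bandit arms.
Z = rng.standard_normal((n_samples, latent_dim))
Zc = Z - Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
directions = Vt[:n_dirs]  # top principal directions, one per arm

counts = np.zeros(n_dirs)   # pulls per direction
rewards = np.zeros(n_dirs)  # accumulated "likes" per direction

def pick_direction(t):
    """UCB1: try each arm once, then balance mean reward and uncertainty."""
    if np.any(counts == 0):
        return int(np.argmin(counts))
    ucb = rewards / counts + np.sqrt(2 * np.log(t + 1) / counts)
    return int(np.argmax(ucb))

def swipe_update(arm, liked, z, step=0.5):
    """Record one swipe and move the latent code along the chosen direction."""
    counts[arm] += 1
    rewards[arm] += float(liked)
    sign = 1.0 if liked else -1.0
    return z + sign * step * directions[arm]

# 2) Simulated session: the "user" likes only moves along direction 0.
z = rng.standard_normal(latent_dim)
for t in range(50):
    arm = pick_direction(t)
    liked = arm == 0  # toy preference model in place of a real swipe
    z = swipe_update(arm, liked, z)

print(int(np.argmax(rewards)))  # → 0, the direction the user consistently liked
```

In the real system the reward would come from the user swiping on images decoded from `z`, so the bandit concentrates exploration on the semantic directions the user responds to rather than the full 512-dimensional latent space.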